Semiconductor lasers have been rapidly evolving to meet the demands of next-generation optical networks. This evolution imposes much more stringent requirements on laser reliability, which is dominated by degradation mechanisms (e.g., sudden degradation) that limit the semiconductor laser lifetime. Physics-based approaches are often used to characterize the degradation behavior analytically, yet they require explicit domain knowledge and accurate mathematical models. Building such models can be very challenging due to an incomplete understanding of the complex physical processes inducing degradation under various operating conditions. To overcome these limitations, we propose a new data-driven approach that extracts useful insights from operational monitoring data to predict the degradation trend without requiring any specific knowledge or physical model. The proposed approach is based on an unsupervised technique, a conditional variational autoencoder, and is validated using vertical-cavity surface-emitting laser (VCSEL) and tunable edge-emitting laser reliability data. The experimental results confirm that our model (i) achieves good degradation prediction and generalization performance, yielding an F1 score of 95.3%, (ii) outperforms several baseline ML-based anomaly detection techniques, and (iii) helps to shorten aging tests by predicting failed devices early, before the end of the test, thereby saving costs.
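As a concrete illustration of the kind of model involved, the following is a minimal PyTorch sketch of a conditional variational autoencoder; the layer sizes, the conditioning vector (e.g., operating conditions), and the use of reconstruction error to flag degrading devices are our assumptions, not the authors' implementation.

```python
# Minimal sketch of a conditional VAE for degradation-trend modelling (assumptions:
# layer sizes, conditioning on an operating-condition vector c, MSE reconstruction).
import torch
import torch.nn as nn

class ConditionalVAE(nn.Module):
    def __init__(self, x_dim=32, c_dim=4, z_dim=8, h_dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim + c_dim, h_dim), nn.ReLU())
        self.mu, self.logvar = nn.Linear(h_dim, z_dim), nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim + c_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x, c):
        h = self.enc(torch.cat([x, c], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(torch.cat([z, c], dim=-1)), mu, logvar

def cvae_loss(x_hat, x, mu, logvar):
    recon = nn.functional.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # a device whose monitored sequence reconstructs poorly could be flagged as degrading (assumption)
    return recon + kl
```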
Semiconductor lasers, among the key components of optical communication systems, have been rapidly evolving to meet the requirements of next-generation optical networks with respect to high speed, low power consumption, small form factor, etc. However, these demands have brought severe challenges to semiconductor laser reliability, and a great deal of attention has therefore been devoted to improving it to ensure reliable transmission. In this paper, a predictive maintenance framework using machine learning techniques is proposed for real-time health monitoring and prognosis of semiconductor lasers, thus enhancing their reliability. The proposed approach is composed of three stages: (i) real-time performance degradation prediction, (ii) degradation detection, and (iii) remaining useful life (RUL) prediction. First, an attention-based gated recurrent unit (GRU) model is adopted for real-time prediction of performance degradation. Then, a convolutional autoencoder is used to detect degradation or abnormal behavior of a laser, given the predicted degradation performance values. Once an abnormal state is detected, an attention-based deep learning model is used to predict the RUL, and the estimated RUL is then fed into decision making and maintenance planning. The proposed framework is validated using experimental data derived from accelerated aging tests conducted on semiconductor tunable lasers. The proposed approach achieves very good degradation performance prediction with a small root mean square error (RMSE) of 0.01, a good anomaly detection accuracy of 94.24%, and better RUL estimation than existing ML-based laser RUL prediction models.
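For the first stage, an attention-based GRU could look roughly like the following PyTorch sketch; the additive attention form, the dimensions, and the one-step-ahead prediction head are illustrative assumptions rather than the paper's architecture.

```python
# Hypothetical sketch of an attention-based GRU for one-step-ahead degradation prediction.
import torch
import torch.nn as nn

class AttentionGRU(nn.Module):
    def __init__(self, in_dim=1, hidden=64):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)   # scores each time step
        self.head = nn.Linear(hidden, 1)    # predicts the next performance value

    def forward(self, x):                   # x: (batch, time, in_dim)
        h, _ = self.gru(x)                  # h: (batch, time, hidden)
        w = torch.softmax(self.score(h), dim=1)
        context = (w * h).sum(dim=1)        # attention-weighted summary of the sequence
        return self.head(context)
```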
Multilingual Pretrained Language Models (MPLMs) have shown strong multilinguality in recent empirical cross-lingual transfer studies. In this paper, we propose the Prompts Augmented by Retrieval Crosslingually (PARC) pipeline to improve zero-shot performance on low-resource languages (LRLs) by augmenting the context with semantically similar sentences retrieved from a high-resource language (HRL) as prompts. PARC improves zero-shot performance on three downstream tasks (binary sentiment classification, topic categorization, and natural language inference) with multilingual parallel test sets across 10 LRLs covering 6 language families, in both the unlabeled setting (+5.1%) and the labeled setting (+16.3%). PARC-labeled also outperforms the finetuning baseline by 3.7%. We find a significant positive correlation between cross-lingual transfer performance on the one hand, and both the similarity between the high- and low-resource languages and the amount of low-resource pretraining data on the other. A robustness analysis suggests that PARC has the potential to achieve even stronger performance with more powerful MPLMs.
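The retrieval step can be sketched as follows; the embed() encoder, the prompt template, and the sentiment-classification wording are hypothetical placeholders, not the PARC implementation.

```python
# Hypothetical sketch of retrieval-augmented prompting: retrieve the most similar
# labelled high-resource-language (HRL) example and prepend it to the
# low-resource-language (LRL) input.
import numpy as np

def embed(texts, dim=64):
    # toy stand-in for a multilingual sentence encoder (assumption, not PARC's retriever)
    out = np.zeros((len(texts), dim))
    for i, t in enumerate(texts):
        for tok in t.lower().split():
            out[i, hash(tok) % dim] += 1.0
    return out

def build_prompt(lrl_input, hrl_pool, hrl_labels):
    q = embed([lrl_input])[0]
    E = embed(hrl_pool)
    sims = E @ q / (np.linalg.norm(E, axis=1) * np.linalg.norm(q) + 1e-9)
    i = int(np.argmax(sims))   # semantically closest HRL sentence
    # labelled setting: include the retrieved sentence together with its label (template is hypothetical)
    return f"{hrl_pool[i]} Sentiment: {hrl_labels[i]}. {lrl_input} Sentiment:"
```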
We solve robot trajectory planning problems at industry-relevant scales. Our end-to-end solution integrates highly versatile random-key algorithms with model stacking and ensemble techniques, as well as path relinking for solution refinement. The core optimization module consists of a biased random-key genetic algorithm. Through a distinct separation of problem-independent and problem-dependent modules, we achieve an efficient problem representation with a native encoding of constraints. We show that generalizations to alternative algorithmic paradigms such as simulated annealing are straightforward. We provide numerical benchmark results for industry-scale data sets. Our approach is found to consistently outperform greedy baseline results. To assess the capabilities of today's quantum hardware, we complement the classical approach with results obtained on quantum annealing hardware, using QBSOLV on Amazon Braket. Finally, we show how the latter can be integrated into our larger pipeline, thus providing a quantum-ready hybrid solution to the problem.
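A minimal sketch of a biased random-key genetic algorithm loop is given below; the decoder (sorting keys into a visiting order) and all parameter values are illustrative, and the problem-dependent part of such a method would live entirely in decode().

```python
# Hypothetical BRKGA sketch: real-valued "random keys" evolve, a decoder maps them
# to a feasible solution, and elite genes are inherited with bias rho.
import numpy as np

def decode(keys, cost_fn):
    order = np.argsort(keys)          # random keys -> permutation (natural encoding)
    return cost_fn(order), order

def brkga(n_genes, cost_fn, pop=100, elite=0.2, mutant=0.1, rho=0.7, gens=200, seed=0):
    rng = np.random.default_rng(seed)
    P = rng.random((pop, n_genes))
    n_e, n_m = int(elite * pop), int(mutant * pop)
    for _ in range(gens):
        fitness = np.array([decode(ind, cost_fn)[0] for ind in P])
        P = P[np.argsort(fitness)]                      # best (lowest cost) first
        children = []
        for _ in range(pop - n_e - n_m):
            e = P[rng.integers(n_e)]                    # elite parent
            o = P[rng.integers(n_e, pop)]               # non-elite parent
            mask = rng.random(n_genes) < rho            # biased crossover
            children.append(np.where(mask, e, o))
        P = np.vstack([P[:n_e], np.array(children), rng.random((n_m, n_genes))])
    best = min(P, key=lambda ind: decode(ind, cost_fn)[0])
    return decode(best, cost_fn)
```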
We show how graph neural networks can be used to solve the canonical graph coloring problem. We frame coloring as a multi-class node classification problem and utilize an unsupervised training strategy based on the statistical physics Potts model. Generalizations to other multi-class problems such as community detection, data clustering, and the minimum clique cover problem are straightforward. We provide numerical benchmark results and illustrate our approach with an end-to-end application for a real-world scheduling use case within a comprehensive encode-process-decode framework. Our optimization approach performs on par with or outperforms existing solvers and is able to scale to problems with millions of variables.
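The unsupervised Potts-style objective can be sketched as follows; the GNN that produces the colour logits is omitted, and this loss form (the expected number of monochromatic edges under the soft assignment) is our reading of the approach rather than the paper's exact code.

```python
# Hypothetical sketch of an unsupervised Potts-model colouring loss: soft colour
# assignments from any node-level GNN are penalised whenever adjacent nodes agree.
import torch

def potts_coloring_loss(logits, edge_index):
    # logits: (num_nodes, num_colors); edge_index: (2, num_edges) of node indices
    p = torch.softmax(logits, dim=-1)
    src, dst = edge_index
    # expected number of monochromatic edges under the soft colour assignment
    return (p[src] * p[dst]).sum()
```

At inference time, hard colours would simply be the argmax of the logits per node.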
Machine learning in medical imaging during clinical routine is impaired by changes in scanner protocols, hardware, or policies, resulting in heterogeneous acquisition settings. When a deep learning model is trained on an initial static training set, model performance and reliability suffer from changes in acquisition characteristics, as data and targets may become inconsistent. Continual learning can help to adapt models to the changing environment by training on continuous data streams. However, continual manual expert labelling of medical images requires substantial effort. Thus, ways to use labelling resources efficiently on carefully selected new examples are necessary to render this strategy feasible. Here, we propose a method for continual active learning operating on a stream of medical images in a multi-scanner setting. The method automatically recognizes shifts in image acquisition characteristics (new domains), selects optimal examples for labelling, and adapts training accordingly. Labelling is subject to a limited budget, resembling typical real-world scenarios. To demonstrate generalizability, we evaluate the effectiveness of our method on three tasks: cardiac segmentation, lung nodule detection, and brain age estimation. Results show that the proposed method outperforms other active learning methods while effectively counteracting catastrophic forgetting.
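A rough sketch of the stream logic under these assumptions (a style embedding for acquisition characteristics, a fixed distance threshold, and a rehearsal memory) might look like this; none of the names or thresholds come from the paper.

```python
# Hypothetical stream step: flag images whose acquisition-style embedding is far from
# all known domains, spend labelling budget on those, rehearse memory to limit forgetting.
import numpy as np

def continual_active_step(img, known_domains, memory, budget, style_embed, label_fn,
                          train_fn, tau=2.0):
    z = style_embed(img)
    dists = [np.linalg.norm(z - d) for d in known_domains] or [np.inf]
    if min(dists) > tau and budget > 0:          # new acquisition domain detected
        known_domains.append(z)
        memory.append((img, label_fn(img)))      # expert label only for selected examples
        budget -= 1
        train_fn(memory)                         # update the model on the rehearsal memory
    return budget
```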
The invariance of natural objects under perceptual changes is possibly encoded in the brain by symmetries in the graph of synaptic connections. This graph can be established through a biologically plausible process of unsupervised learning across different perceptual modalities. The hypothetical encoding scheme is supported by the correlation structure of naturalistic audio and image data, and it predicts a neural connectivity architecture that is consistent with many empirical observations about primary sensory cortex.
We show that every $d$-dimensional probability distribution of bounded support can be generated through deep ReLU networks out of a $1$-dimensional uniform input distribution. What is more, this is possible without incurring a cost, in terms of approximation error measured in Wasserstein distance, relative to generating the $d$-dimensional target distribution from $d$ independent random variables. This is achieved through a vast generalization of the space-filling approach discovered in (Bailey & Telgarsky, 2018). The construction we propose elicits the importance of network depth in driving the Wasserstein distance between the target distribution and its neural network approximation to zero. Finally, we find that, for histogram target distributions, the number of bits needed to encode the corresponding generative network equals the fundamental limit for encoding probability distributions.
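In symbols, the headline claim can be paraphrased roughly as follows (the notation is ours, not the paper's):

```latex
% Rough paraphrase: a deep ReLU network pushes a scalar uniform seed forward to
% approximate any bounded-support d-dimensional target in Wasserstein distance.
\forall\, \varepsilon > 0 \;\; \exists\, \Phi_\varepsilon \colon \mathbb{R} \to \mathbb{R}^d
\ \text{(deep ReLU network)} \quad \text{such that} \quad
W\!\bigl((\Phi_\varepsilon)_{\#}\,\mathrm{U}[0,1],\ \mu\bigr) \le \varepsilon .
```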
In this paper, we introduce the concept of samplets by transferring the construction of Tausch-White wavelets to the realm of data. In this way we obtain a multilevel representation of discrete data which directly enables data compression, singularity detection, and adaptivity. Applying samplets to represent kernel matrices, as they arise in kernel-based learning or Gaussian process regression, we end up with quasi-sparse matrices. By thresholding small entries, these matrices are compressible to O(N log N) relevant entries, where N is the number of data points. This feature allows for the use of fill-in reducing reorderings to obtain a sparse factorization of the compressed matrices. Besides a comprehensive introduction to samplets and their properties, we provide extensive numerical studies to benchmark the approach. Our results demonstrate that samplets mark a considerable step in the direction of making large data sets accessible for analysis.
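Schematically, the compression step alone could be written as below; the orthogonal transform T stands in for the samplet basis, whose multilevel construction is not reproduced here.

```python
# Schematic of the compression step only: an orthogonal multilevel transform T
# (placeholder for the samplet basis) is applied to both sides of the kernel matrix
# and near-zero entries are dropped to obtain a sparse representation.
import numpy as np
from scipy.sparse import csr_matrix

def compress_kernel(K, T, threshold=1e-6):
    # K: (N, N) kernel matrix, T: (N, N) orthogonal samplet-like transform
    Ks = T @ K @ T.T                      # quasi-sparse in the transformed basis
    Ks[np.abs(Ks) < threshold] = 0.0      # threshold small entries
    return csr_matrix(Ks)                 # sparse storage; ideally O(N log N) entries remain
```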
One of the most influential results in neural network theory is the universal approximation theorem [1, 2, 3], which states that continuous functions can be approximated to within arbitrary accuracy by single-hidden-layer feedforward neural networks. The purpose of this paper is to establish a result in this spirit for the approximation of general discrete-time linear dynamical systems, including time-varying systems, by recurrent neural networks (RNNs). For the subclass of linear time-invariant (LTI) systems, we devise a quantitative version of this statement. Specifically, measuring the complexity of the considered class of LTI systems through metric entropy according to [4], we show that RNNs can optimally learn, or identify in system-theory parlance, stable LTI systems. For LTI systems whose input-output relation is characterized by a difference equation, this means that RNNs can learn the difference equation from input-output traces in a metric-entropy optimal manner.
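For orientation, the setting can be written out as follows (our notation, not the paper's): a causal LTI system described by a difference equation, and the elementary RNN recursion used to identify it from input-output traces.

```latex
% Setting only (our notation): LTI difference equation and a basic RNN recursion.
y[n] = \sum_{k=1}^{p} a_k\, y[n-k] + \sum_{k=0}^{q} b_k\, x[n-k],
\qquad
h_t = \sigma\!\bigl(W h_{t-1} + U x_t\bigr), \quad \hat{y}_t = C\, h_t .
```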